
    Lossy Source Coding with Reconstruction Privacy

    We consider the problem of lossy source coding with side information under a privacy constraint that the reconstruction sequence at a decoder should be kept secret, to a certain extent, from another terminal such as an eavesdropper, a sender, or a helper. We are interested in how the reconstruction privacy constraint at a particular terminal affects the rate-distortion tradeoff. In this work, we allow the decoder to use a random mapping, and give inner and outer bounds to the rate-distortion-equivocation region for different cases where the side information is available non-causally and causally at the decoder. In the special case where each reconstruction symbol depends only on the source description and the current side information symbol, the complete rate-distortion-equivocation region is provided. A binary example is given, illustrating a new tradeoff due to the privacy constraint and a gain from the use of a stochastic decoder.
    Comment: 22 pages, added proofs, to be presented at ISIT 201
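The gain from a stochastic decoder can be illustrated with a minimal toy sketch (this is an illustration of the idea, not the paper's construction): let the reconstruction be the source symbol XORed with independent Bernoulli(p) noise. Against a terminal that already knows the source (e.g. the sender), the reconstruction equivocation is the binary entropy h2(p), while the expected Hamming distortion is p; a deterministic decoder (p = 0) gives zero equivocation.

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Toy stochastic decoder (illustrative assumption, not the paper's scheme):
# Xhat = X XOR N with N ~ Bern(p) independent of X.  A terminal that knows X
# has reconstruction equivocation H(Xhat | X) = h2(p), while the expected
# Hamming distortion is E[d(X, Xhat)] = p.
for p in (0.0, 0.1, 0.25, 0.5):
    print(f"p={p:.2f}  distortion={p:.2f}  equivocation={h2(p):.3f} bits")
```

Sweeping p from 0 to 1/2 traces the tradeoff: equivocation rises to a full bit while distortion grows linearly.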

    Source Coding Problems with Conditionally Less Noisy Side Information

    A computable expression for the rate-distortion (RD) function proposed by Heegard and Berger has eluded information theory for nearly three decades. Heegard and Berger's single-letter achievability bound is well known to be optimal for \emph{physically degraded} side information; however, it is not known whether the bound is optimal for arbitrarily correlated side information (general discrete memoryless sources). In this paper, we consider a new setup in which the side information at one receiver is \emph{conditionally less noisy} than the side information at the other. The new setup includes degraded side information as a special case, and it is motivated by the literature on degraded and less noisy broadcast channels. Our key contribution is a converse proving the optimality of Heegard and Berger's achievability bound in a new setting. The converse rests upon a certain \emph{single-letterization} lemma, which we prove using an information theoretic telescoping identity recently presented by Kramer. We also generalise the above ideas to two different successive-refinement problems.
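The physically degraded special case can be checked numerically in a toy sketch (an illustration of degradedness only, not of the conditionally-less-noisy condition itself): when the side informations form a Markov chain X - Y1 - Y2, data processing forces I(X;Y2) <= I(X;Y1), so the second receiver's side information is strictly weaker. The crossover values a, b below are arbitrary illustrative assumptions.

```python
from itertools import product
from math import log2

def mi(joint):
    """Mutual information I(A;B) in bits from a dict {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

def bsc_joint(crossover):
    """Joint pmf of (X, Y) for X ~ Bern(1/2) observed through a BSC."""
    return {(x, y): 0.5 * ((1 - crossover) if x == y else crossover)
            for x, y in product((0, 1), repeat=2)}

a, b = 0.1, 0.2                        # X -> Y1 (BSC a) -> Y2 (then BSC b)
cascade = a * (1 - b) + (1 - a) * b    # effective crossover from X to Y2
i_xy1 = mi(bsc_joint(a))
i_xy2 = mi(bsc_joint(cascade))
assert i_xy2 <= i_xy1                  # degraded side info is weaker
print(f"I(X;Y1)={i_xy1:.3f}  I(X;Y2)={i_xy2:.3f}")
```

The conditionally-less-noisy ordering studied in the paper is strictly more general than this Markov cascade.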

    New Privacy Mechanism Design With Direct Access to the Private Data

    The design of a statistical signal processing privacy problem is studied where the private data is assumed to be observable. In this work, an agent observes useful data Y, which is correlated with private data X, and wants to disclose the useful information to a user. A statistical privacy mechanism is employed to generate data U based on (X, Y) that maximizes the revealed information about Y while satisfying a privacy criterion. To this end, we use extended versions of the Functional Representation Lemma and the Strong Functional Representation Lemma and combine them with a simple observation which we call the separation technique. New lower bounds on the privacy-utility trade-off are derived, and we show that they can improve the previous bounds. We study the obtained bounds in different scenarios and compare them with previous results.
    Comment: arXiv admin note: substantial text overlap with arXiv:2201.08738, arXiv:2212.1247
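The utility-versus-leakage tension can be made concrete with a toy mechanism (an illustrative assumption, not the paper's lemma-based construction): take private X ~ Bern(1/2), useful Y = X XOR Bern(q), and release U = Y XOR Bern(r). Since X - Y - U is a Markov chain, both utility I(U;Y) and leakage I(U;X) have binary-entropy closed forms, and increasing the release noise r buys lower leakage at the cost of utility. The values q and r are arbitrary.

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# Toy privacy mechanism (illustrative): X ~ Bern(1/2), Y = X xor Bern(q),
# released U = Y xor Bern(r).  Utility is I(U;Y), leakage is I(U;X).
q = 0.1
for r in (0.0, 0.1, 0.3):
    s = q * (1 - r) + (1 - q) * r      # effective crossover from X to U
    utility = 1 - h2(r)                # I(U;Y) in bits
    leakage = 1 - h2(s)                # I(U;X) in bits
    print(f"r={r:.1f}  utility={utility:.3f}  leakage={leakage:.3f}")
```

By data processing, leakage never exceeds utility here; a mechanism designer picks r (or a richer U) to meet the leakage budget while keeping utility as high as possible.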

    Multi-User Privacy Mechanism Design with Non-zero Leakage

    A privacy mechanism design problem is studied through the lens of information theory. In this work, an agent observes useful data Y = (Y_1, ..., Y_N) that is correlated with private data X = (X_1, ..., X_N), which is assumed to be also accessible by the agent. Here, we consider K users where user i demands a sub-vector of Y, denoted by C_i. The agent wishes to disclose C_i to user i. Since C_i is correlated with X, it cannot be disclosed directly. A privacy mechanism is designed to generate disclosed data U which maximizes a linear combination of the users' utilities while satisfying a bounded privacy constraint in terms of mutual information. In a similar work it has been assumed that X_i is a deterministic function of Y_i; however, in this work we let X_i and Y_i be arbitrarily correlated. First, an upper bound on the privacy-utility trade-off is obtained by using a specific transformation, the Functional Representation Lemma, and the Strong Functional Representation Lemma; then we show that the upper bound can be decomposed into N parallel problems. Next, lower bounds on the privacy-utility trade-off are derived using the Functional Representation Lemma and the Strong Functional Representation Lemma. The upper bound is tight within a constant, and the lower bounds assert that the disclosed data is independent of all \{X_j\}_{j=1}^N except one, to which we allocate the maximum allowed leakage. Finally, the obtained bounds are studied in special cases.
    Comment: arXiv admin note: text overlap with arXiv:2205.04881, arXiv:2201.0873